The debate over the role robots will play in the future of warfare is taking place right now, as the development of automated lethal technology truly begins to take shape. Predator-drone-style combat machines are just the tip of the iceberg for the lethal weaponry to come, and some worry that when robots are calling the shots, things could get out of hand.
Recently there has been debate at the U.N. about "killer robots," with prominent scientists, researchers, and human rights organizations warning that this type of technology, lethal systems that remove the need for human control, could cause a slew of unintended consequences to the detriment of humanity.
I would say that if it is ever a problem, it is likely a very long way off, at least 100-300 years. Here is my reasoning. We don't understand how human, or even animal, intelligence works, let alone how the human brain produces it. Is intelligence even well defined? We have some general ideas, but these have yet to translate into anything as intelligent as some invertebrates. I imagine a war against an AI opponent would be fairly easy to win, because so much falls outside what machines can currently be programmed to handle, and they lack the broad mental skillsets of humans or even animals. They might start out with effective ways to kill us, but human ingenuity and the chaos of the real world would quickly push the AI into situations it is not equipped to comprehend or deal with.

This threat pales in comparison with the very real threat of actual humans who are actively doing harm to others, right now, and it will continue. Civil unrest and wars are, and will remain, common. People of various factions will continue to infiltrate public gatherings and use whatever is at their disposal to end the lives of their fellow humans: kitchen knives, ordinary vehicles, flammable liquids, easily obtainable harsh chemicals, manufactured weapons, blunt objects, sticks, rocks, their fists. The damage a determined person of average intelligence can do trumps any threat posed by a well-equipped super genius with a breakthrough in AI.
That's why part of me wonders why everyone is afraid of AI now. It's not even in its infancy; it's only a developing concept. Yet when I look back over my own life, the bloodshed flows in torrents. In my own country, in my own adult lifetime, the restrictions placed on civilians in response to actual attacks would have sounded like dystopian fiction if I had described them to someone in 1990. What do you mean the government took over airport security and pats down every passenger, or makes them take off their belts and shoes? Yet we face legal action if we offend the same sort of people who carried out these attacks? Something is rotten in Denmark.
First of all, Elon Musk is a very successful businessman who knows a lot about business, science, and technology, but his philosophical musings tend to be somewhat superficial, so I would not treat him as an authority on this matter.
Now, the danger of being destroyed by our own technology getting out of hand is always there. However, alongside technological evolution, our security measures are also evolving. Remember how in the '90s hackers had nearly free access to almost every computer on the planet, whereas nowadays they can usually only get in by tricking users into disclosing their private data? Antivirus technology simply evolved faster than hacking techniques, and in the end the Internet became much more secure. I think the same holds for automation: security protocols have reached a very high level of reliability, and where in the past we had regular worker deaths in factories, we now have automated machines working around the clock with little-to-no maintenance and almost zero emergencies.
The proposal to establish governance over killer robots, however, is truly something that could lead to a dire end. Once governance is established, it has to be put in someone's hands; suppose those are the hands of a government, or of a private company affiliated with one. That company or government then controls the robots, their security protocols, and their deployment. How likely is it that this control will never be abused, and that killer robots, far surpassing in strength any other military force we could possibly field, will never be turned against the people they were intended to protect?
What I see as a more realistic and viable solution is the development of decentralized security protocols by private companies, independent analysis of those protocols by scientists and programmers, and then deployment of the robots in the field once their reliability has been established. The more independent the deployed robots are from external intrusion, the harder they are to hijack and the less likely they are to be misused in the first place.
Just as Musk's Teslas are not governed in any special way, despite being essentially automated cars that could in theory do a lot of damage, killer robots should not be specially governed either. Technology itself is not nearly as scary as what a human can do with it.